
    A Complete Enumeration and Classification of Two-Locus Disease Models

    There are 512 two-locus, two-allele, two-phenotype, fully penetrant disease models. Under the permutations between the two alleles at a locus, between the two loci, and between the affected and unaffected phenotypes, one model can be considered equivalent to another. These permutations greatly reduce the number of two-locus models relevant to the analysis of complex diseases. This paper determines the number of non-redundant two-locus models (which can be 102, 100, 96, 51, 50, or 48, depending on which permutations are used and on whether zero-locus and single-locus models are excluded). Whenever possible, these non-redundant two-locus models are classified by their properties. Besides the familiar multiplicative models (logical AND), heterogeneity models (logical OR), and threshold models, new classifications are added or expanded: modifying-effect models, logical XOR models, interference and negative interference models (neither dominant nor recessive), conditionally dominant/recessive models, missing lethal genotype models, and highly symmetric models. The following aspects of two-locus models are studied: the marginal penetrance tables at both loci, the expected joint identity-by-descent probabilities, and the correlation between the marginal identity-by-descent probabilities at the two loci. These results are useful for linkage analyses that use single-locus models when the underlying disease model is two-locus, and for correlation analyses of the linkage signals obtained at different locations with a single-locus model. Comment: LaTeX, to be published in Human Heredity.
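    The reduction from 512 penetrance tables to a much smaller set of non-redundant models can be reproduced by brute force. The following Python sketch (not taken from the paper) enumerates all 2^9 = 512 fully penetrant 3x3 penetrance tables and counts equivalence classes under the allele, locus, and (optionally) affection-status permutations described above; the counts include the zero-locus and single-locus tables and should correspond to the figures quoted in the abstract for the matching choice of permutations.

```python
from itertools import product

def images(table, flip_affection=False):
    """All images of a 3x3 penetrance table under allele swaps at either
    locus (row/column reversal), the locus swap (transpose), and optionally
    the affected/unaffected swap (0/1 complement)."""
    out = set()
    for rrev, crev, tr in product((False, True), repeat=3):
        t = [list(r) for r in table]
        if rrev:
            t = t[::-1]                      # swap alleles at locus 1
        if crev:
            t = [r[::-1] for r in t]         # swap alleles at locus 2
        if tr:
            t = [list(c) for c in zip(*t)]   # swap the two loci
        out.add(tuple(map(tuple, t)))
        if flip_affection:
            out.add(tuple(tuple(1 - x for x in r) for r in t))
    return out

def count_nonredundant(flip_affection=False):
    seen, classes = set(), 0
    for bits in product((0, 1), repeat=9):   # 2^9 = 512 tables
        table = tuple(tuple(bits[3 * i:3 * i + 3]) for i in range(3))
        if table in seen:
            continue
        classes += 1
        seen |= images(table, flip_affection)
    return classes

print(count_nonredundant(False))  # allele and locus permutations only: 102
print(count_nonredundant(True))   # also swapping affected/unaffected: 51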

    Diminishing Return for Increased Mappability with Longer Sequencing Reads: Implications of the k-mer Distributions in the Human Genome

    The amount of non-unique sequence (non-singletons) in a genome directly affects the difficulty of read alignment to a reference assembly for high-throughput sequencing data. Although a greater read length increases the chance of a read being uniquely mapped to the reference genome, a quantitative analysis of the influence of read length on mappability has been lacking. To address this question, we evaluate the k-mer distribution of the human reference genome. The k-mer frequency is determined for k ranging from 20 to 1000 basepairs. We use the proportion of non-singleton k-mers to evaluate the mappability of reads of the corresponding length. We observe that the proportion of non-singletons decreases slowly with increasing k and can be fitted by piecewise power-law functions with different exponents over different ranges of k. The faster decay at smaller values of k indicates more limited gains in mappability for read lengths greater than 200 basepairs. The frequency distributions of k-mers exhibit long, power-law-like tails, and the rank-frequency plots show a concave Zipf's curve. The locations of the most frequent 1000-mers comprise 172 kilobase-sized regions, including four large stretches on chromosomes 1 and X, containing genes with biomedical implications. Even a read length of 1000 would be insufficient to reliably sequence these specific regions. Comment: 5 figures.
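    The central quantity, the proportion of non-singleton k-mers, is simple to compute for small inputs. Below is a minimal Python sketch that reads "non-singleton proportion" as the fraction of k-mer start positions whose k-mer occurs more than once in the sequence; this interpretation, the toy sequence, and the neglect of reverse complements are assumptions for illustration, and counting the full human reference at k up to 1000 would require a dedicated k-mer counter rather than an in-memory dictionary.

```python
from collections import Counter

def nonsingleton_fraction(seq, k):
    """Fraction of k-mer start positions in `seq` whose k-mer occurs more
    than once, i.e., positions where a length-k read is not uniquely
    placeable by sequence alone. Windows containing 'N' are skipped."""
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    kmers = [km for km in kmers if "N" not in km]
    counts = Counter(kmers)
    nonsingle = sum(1 for km in kmers if counts[km] > 1)
    return nonsingle / len(kmers) if kmers else 0.0

# Toy example: short k-mers inside the repeated motif are non-unique,
# and the non-singleton proportion drops as k grows.
seq = "ACGTACGTACGT" * 50 + "TTGCA" * 20
for k in (4, 8, 16, 32):
    print(k, round(nonsingleton_fraction(seq, k), 3))
```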

    Does Logarithm Transformation of Microarray Data Affect Ranking Order of Differentially Expressed Genes?

    A common practice in microarray analysis is to transform the raw data (light intensities) by a logarithmic transformation, the justification being that it makes the distribution more symmetric and Gaussian-like. Since this transformation is not universally applied in microarray analysis, we examined whether this difference in the treatment of the raw data affects the "high-level" analysis results. In particular, we asked whether the differentially expressed genes obtained by the t-test, the regularized t-test, or logistic regression have altered rank orders due to the presence or absence of the transformation. We show that as much as 20%-40% of significant genes are "discordant" (significant in only one form of the data, not in both), depending on the test being used and the threshold value for claiming significance. The t-test is more likely to be affected by the logarithmic transformation than logistic regression, and the regularized t-test is more affected than the t-test. On the other hand, the very top-ranking genes (e.g., up to the top 20-50 genes, depending on the test) are not affected by the logarithmic transformation. Comment: submitted to IEEE/EMBS Conference'0
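    The discordance measure can be mimicked on simulated data. The sketch below uses simulated intensities, a plain two-sample t-test, and an arbitrary significance threshold (none of which are the paper's data or settings) to compute the fraction of genes declared significant in only one of the raw and log-transformed versions of the same data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated intensities for two groups of 10 arrays over 2000 genes;
# the first 200 genes are made differentially expressed.
n_genes, n_per_group = 2000, 10
base = rng.lognormal(mean=6, sigma=1, size=(n_genes, 1))
group1 = base * rng.lognormal(0, 0.3, size=(n_genes, n_per_group))
group2 = base * rng.lognormal(0, 0.3, size=(n_genes, n_per_group))
group2[:200] *= 1.5

def pvalues(x1, x2):
    """Per-gene two-sample t-test p-values (genes along axis 0)."""
    return stats.ttest_ind(x1, x2, axis=1).pvalue

alpha = 0.01
sig_raw = pvalues(group1, group2) < alpha
sig_log = pvalues(np.log2(group1), np.log2(group2)) < alpha

# "Discordant": significant in exactly one of the two forms of the data.
discordant = np.logical_xor(sig_raw, sig_log).sum()
union = np.logical_or(sig_raw, sig_log).sum()
print(f"discordant fraction: {discordant / union:.2%}")
```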

    Comparing single-nucleotide polymorphism marker-based and microsatellite marker-based linkage analyses

    We compared linkage analysis results for an alcoholism trait, ALDX1 (DSM-III-R and Feighner criteria), using a nonparametric linkage analysis method that takes into account allele sharing among several affected persons, for both microsatellite and single-nucleotide polymorphism (SNP) markers (Affymetrix and Illumina) in the Collaborative Study on the Genetics of Alcoholism (COGA) dataset provided to participants of Genetic Analysis Workshop 14 (GAW14). The two sets of linkage results from the dense Affymetrix SNP markers and the less densely spaced Illumina SNP markers are very similar. The linkage results from microsatellite and SNP markers are generally similar, but the match is not perfect. Strong linkage peaks were found on chromosome 7 in three sets of linkage analyses using both SNP and microsatellite marker data. We also observed that, for the SNP markers, using the provided genetic map versus a map obtained by converting 1 megabase pair (Mb) to 1 centimorgan (cM) did not change the linkage results. We recommend using the 1 Mb-to-1 cM converted map in a first round of linkage analysis with SNP markers when map integration is an issue.
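    The 1 Mb-to-1 cM conversion mentioned above amounts to dividing the physical position by one million, i.e., assuming a uniform recombination rate of 1 cM per Mb. A minimal Python sketch, with hypothetical SNP positions, for building such a map:

```python
def bp_to_cm(position_bp, cm_per_mb=1.0):
    """Approximate genetic position (cM) from a physical position (bp),
    assuming a uniform recombination rate of `cm_per_mb` cM per megabase.
    With the default value this is the simple 1 Mb ~ 1 cM conversion."""
    return position_bp / 1_000_000 * cm_per_mb

# Hypothetical SNP physical positions (bp) on one chromosome:
snp_positions_bp = [1_250_000, 5_600_000, 12_100_000]
print([round(bp_to_cm(p), 2) for p in snp_positions_bp])  # [1.25, 5.6, 12.1]
```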